Special issue: Advances in learning schemes for function approximation
Abstract
The eleven papers included in this special issue represent a selection of extended contributions presented at the 11th International Conference on Intelligent Systems Design and Applications (ISDA), held in Córdoba, Spain, November 22–24, 2011. Papers were selected on the basis of fundamental ideas and concepts rather than the direct usage of well-established techniques. This special issue is thus aimed at practitioners, researchers, and postgraduate students engaged in developing and applying advanced intelligent systems to real-world problems in the industrial and environmental fields. The papers are organized as follows.

In the first contribution, Barros et al. propose a novel Bottom-Up Oblique Decision-Tree Induction Framework called BUTIF. BUTIF does not rely on an impurity measure for dividing nodes, since the data resulting from each split is known a priori. BUTIF allows the adoption of distinct clustering algorithms and binary classifiers, respectively, for generating the initial leaves of the tree and the splitting hyperplanes in its internal nodes. It is also capable of performing embedded feature selection, which may reduce the number of features in each hyperplane, thus improving model comprehension. Unlike virtually every top-down decision-tree induction algorithm, BUTIF does not require a subsequent pruning procedure to avoid overfitting, because its bottom-up nature does not overgrow the tree. Empirical results show the effectiveness of the proposed framework.

In the second contribution, Bolón-Canedo et al. propose an ensemble of filters for classification, aimed at achieving good classification performance together with a reduction in input dimensionality. This approach overcomes the problem of selecting an appropriate filter method for each problem at hand, a choice that depends heavily on the characteristics of the dataset.
The adequacy of using an ensemble of filters rather than a single filter was demonstrated on synthetic and real data, paving the way for its application to a challenging scenario such as DNA microarray classification.

In the third contribution, Cruz-Ramírez et al. present a study of the use of a multi-objective optimization approach in the context of ordinal classification and propose a new performance metric, the Maximum Mean Absolute Error (MMAE). MMAE considers the per-class distribution of patterns and the magnitude of the errors, both crucial issues for ordinal regression problems. In addition, the authors empirically show that some of the performance metrics are competing objectives, which justifies the use of multi-objective optimization strategies. In this study, a multi-objective evolutionary algorithm optimizes an artificial neural network ordinal model with different pairs of metrics, concluding that the pair of the Mean Absolute Error (MAE) and the proposed MMAE is the most favorable. A study of the relationship between the metrics of this proposal is performed, and the graphical representation in the two-dimensional space where the search of the evolutionary algorithm takes place is analyzed. The results show good classification performance, opening new lines of research in the evaluation and model selection of ordinal classifiers.

In the fourth contribution, Cateni et al. present a novel resampling method for binary classification on imbalanced datasets that combines an oversampling and an undersampling technique. Several tests have been carried out to assess the efficiency of the proposed method. Four classifiers, based respectively on Support Vector Machines, Decision Trees, labeled Self-Organizing Maps, and Bayesian classifiers, have been developed and applied for binary classification on four datasets: a synthetic dataset, a widely used public dataset, and two datasets coming from industrial applications.
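The specific oversampling and undersampling techniques combined by Cateni et al. are not detailed in this summary; as a minimal illustration of the general idea, the sketch below (an assumption, not the authors' method) randomly oversamples the minority class and undersamples the majority class toward a common target size:

```python
import random
from collections import Counter

def hybrid_resample(X, y, seed=0):
    """Illustrative hybrid resampling for binary labels: undersample the
    majority class and oversample the minority class, so both classes end
    up at the mean of the two original class sizes."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = sum(counts.values()) // len(counts)  # mean class size
    X_out, y_out = [], []
    for label in counts:
        idx = [i for i, lab in enumerate(y) if lab == label]
        if len(idx) >= target:
            chosen = rng.sample(idx, target)                  # undersample
        else:
            chosen = [rng.choice(idx) for _ in range(target)] # oversample
        X_out.extend(X[i] for i in chosen)
        y_out.extend(y[i] for i in chosen)
    return X_out, y_out
```

A hybrid scheme like this avoids the two single-sided failure modes: pure oversampling can duplicate minority points until the model overfits them, while pure undersampling can discard informative majority examples.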
In the fifth contribution, Ibañez et al. propose two greedy wrapper forward cost-sensitive selective naive Bayes approaches. Both approaches readjust the probability thresholds of each class so as to select the class with the minimum expected cost. The first algorithm (CS-SNB-Accuracy) considers adding each variable to the model and measures the performance of the resulting model on the training data. In contrast, the second algorithm (CS-SNB-Cost) considers adding variables that reduce the misclassification cost, that is, the distance between the readjusted class and the actual class. The authors tested the algorithms in the area of bibliometric index prediction. Given the popularity of the well-known h-index, they built several prediction models to forecast the annual increase of the h-index for Neurosciences journals over a four-year time horizon. Results show that the approaches, particularly CS-SNB-Accuracy, often achieved higher accuracy values than other Bayesian classifiers. Furthermore, CS-SNB-Cost almost always achieved a lower average cost than the analyzed standard classifiers. These cost-sensitive selective naive Bayes approaches outperform the selective naive Bayes in terms of accuracy and average cost, so the cost-sensitive learning approach could also be applied in other probabilistic classification approaches.

In the sixth paper, Sobrino et al. approach causal questions with the aim of: (1) answering what-questions by identifying the cause of an effect; (2) answering how-questions by selecting an appropriate part of a mechanism that relates cause–effect pairs; and (3) answering why-questions by identifying central causes in the mechanism which answer how-questions. To automatically obtain answers to why-questions, the authors hypothesize that the deepest knowledge associated
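The decision rule shared by the two cost-sensitive approaches above — predicting the class with minimum expected cost rather than maximum posterior probability — can be sketched as follows (a generic illustration, with a hypothetical cost matrix; not the authors' implementation):

```python
import numpy as np

def min_expected_cost_class(class_probs, cost_matrix):
    """Pick the class with minimum expected misclassification cost.

    class_probs: shape (n_classes,), posteriors P(c | x), e.g. from naive Bayes.
    cost_matrix: shape (n_classes, n_classes), cost_matrix[true][predicted].
    """
    class_probs = np.asarray(class_probs)
    cost_matrix = np.asarray(cost_matrix)
    # expected[j] = sum_i P(i | x) * cost(i, j): expected cost of predicting j
    expected = class_probs @ cost_matrix
    return int(np.argmin(expected))
```

With asymmetric costs this rule can overturn the maximum-posterior choice: given posteriors (0.6, 0.4) and a cost of 10 for missing class 1 versus 1 for missing class 0, predicting class 1 has the lower expected cost even though class 0 is more probable.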
Similar Articles
Verification of an Evolutionary-based Wavelet Neural Network Model for Nonlinear Function Approximation
Nonlinear function approximation is one of the most important tasks in system analysis and identification. Several models have been presented to achieve accurate approximation of nonlinear mathematical functions. However, the majority of these models are specific to certain problems and systems. In this paper, an evolutionary-based wavelet neural network model is proposed for structure definiti...
Forecasting the Tehran Stock market by Machine Learning Methods using a New Loss Function
Stock market forecasting has attracted many researchers and investors, and many studies have been conducted in this field. These studies have led to the development of many predictive methods, the most widely used of which are machine learning-based methods. In machine learning-based methods, the loss function has a key role in determining the model weights. In this study, a new loss function is ...
Function Approximation Approach for Robust Adaptive Control of Flexible joint Robots
This paper is concerned with the problem of designing a robust adaptive controller for flexible joint robots (FJR). Under the assumption of weak joint elasticity, the FJR is first modeled and converted into singular perturbation form. The control law consists of a FAT-based adaptive control strategy and a simple correction term. The first term of the controller is used to stabilize the slow dy...
APPROXIMATION OF STOCHASTIC PARABOLIC DIFFERENTIAL EQUATIONS WITH TWO DIFFERENT FINITE DIFFERENCE SCHEMES
We focus on the use of two stable and accurate explicit finite difference schemes in order to approximate the solution of stochastic partial differential equations of Itô type, in particular, parabolic equations. The main properties of these deterministic difference methods, i.e., convergence, consistency, and stability, are separately developed for the stochastic cases.
Biorthogonal wavelet-based full-approximation schemes for the numerical solution of elasto-hydrodynamic lubrication problems
Biorthogonal wavelet-based full-approximation schemes are introduced in this paper for the numerical solution of elasto-hydrodynamic lubrication line and point contact problems. The proposed methods give higher accuracy in terms of better convergence with low computational time, which has been demonstrated through illustrative problems.
A Sparse Greedy Self-Adaptive Algorithm for Classification of Data
Kernels have become an integral part of most data classification algorithms. However, the kernel parameters are generally not optimized during learning. In this work, a novel adaptive technique called Sequential Function Approximation (SFA) has been developed for classification that determines the values of the control and kernel hyper-parameters during learning. This tool constructs sparse radia...
Journal: Neurocomputing
Volume 135, Issue -
Pages: -
Publication year: 2014